DemoBias: An Empirical Study to Trace Demographic Biases in Vision Foundation Models

Sufian, Abu, Ghosh, Anirudha, Barman, Debaditya, Leo, Marco, Distante, Cosimo

arXiv.org Artificial Intelligence

Large Vision Language Models (LVLMs) have demonstrated remarkable capabilities across various downstream tasks, including biometric face recognition (FR) with description. However, demographic biases remain a critical concern in FR, as these foundation models often fail to perform equitably across demographic groups defined by ethnicity/race, gender, and age. Therefore, through our work DemoBias, we conduct an empirical evaluation to investigate the extent of demographic biases in LVLMs on biometric FR with textual token generation tasks. We fine-tuned and evaluated three widely used pre-trained LVLMs: LLaVA, BLIP-2, and PaliGemma on our own generated demographically balanced dataset. We use several evaluation metrics, such as group-specific BERTScores and the Fairness Discrepancy Rate, to quantify and trace performance disparities. The experimental results deliver compelling insights into the fairness and reliability of LVLMs across diverse demographic groups. Our empirical study uncovered demographic biases in LVLMs, with PaliGemma and LLaVA exhibiting higher disparities for the Hispanic/Latino, Caucasian, and South Asian groups, whereas BLIP-2 demonstrated comparatively consistent performance. Repository: https://github.com/Sufianlab/DemoBias.
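The evaluation described above rests on two ingredients: per-group generation scores (e.g., BERTScore) and a single discrepancy number summarizing how far groups diverge. The abstract does not spell out the exact Fairness Discrepancy Rate formula, so the sketch below assumes an illustrative definition: the gap between the best- and worst-scoring demographic groups. The group labels and score values are hypothetical.

```python
# Illustrative sketch of group-wise scoring and a max-min discrepancy.
# Assumption: "discrepancy rate" here is simply (best group mean - worst
# group mean); the paper's exact FDR definition may differ.
from collections import defaultdict

def group_means(records):
    """records: iterable of (group_label, score) pairs -> {group: mean score}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for group, score in records:
        sums[group] += score
        counts[group] += 1
    return {g: sums[g] / counts[g] for g in sums}

def discrepancy_rate(means):
    """Gap between best- and worst-scoring groups; 0 means perfectly equitable."""
    values = list(means.values())
    return max(values) - min(values)

# Toy example with made-up BERTScore-style F1 values per sample.
records = [
    ("Caucasian", 0.91), ("Caucasian", 0.89),
    ("South Asian", 0.84), ("South Asian", 0.86),
    ("Hispanic/Latino", 0.80), ("Hispanic/Latino", 0.82),
]
means = group_means(records)
print(discrepancy_rate(means))  # larger gap -> larger demographic disparity
```

A model that performs equitably would produce near-identical group means and a discrepancy close to zero; the biases reported for PaliGemma and LLaVA would show up as a larger gap.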


Towards Clinical AI Fairness: Filling Gaps in the Puzzle

Liu, Mingxuan, Ning, Yilin, Teixayavong, Salinelat, Liu, Xiaoxuan, Mertens, Mayli, Shang, Yuqing, Li, Xin, Miao, Di, Xu, Jie, Ting, Daniel Shu Wei, Cheng, Lionel Tim-Ee, Ong, Jasmine Chiat Ling, Teo, Zhen Ling, Tan, Ting Fang, RaviChandran, Narrendar, Wang, Fei, Celi, Leo Anthony, Ong, Marcus Eng Hock, Liu, Nan

arXiv.org Artificial Intelligence

The ethical integration of Artificial Intelligence (AI) in healthcare necessitates addressing fairness, a concept that is highly context-specific across medical fields. Extensive studies have been conducted to expand the technical components of AI fairness, while numerous calls for AI fairness have been raised from the healthcare community. Despite this, a significant disconnect persists between technical advancements and their practical clinical applications, resulting in a lack of contextualized discussion of AI fairness in clinical settings. Through a detailed evidence gap analysis, our review systematically pinpoints several deficiencies concerning both healthcare data and the available AI fairness solutions. We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized. Additionally, our analysis highlights a substantial reliance on group fairness, which aims to ensure equality among demographic groups from a macro healthcare system perspective; in contrast, individual fairness, which focuses on equity at a more granular level, is frequently overlooked. To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities. Beyond applying existing AI fairness methods in healthcare, we further emphasize the importance of involving healthcare professionals to refine AI fairness concepts and methods, ensuring contextually relevant and ethically sound AI applications in healthcare.